Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero- and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
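Concretely, "fine-tuning with and without demonstrations" comes down to how each training example is serialized and which tokens carry the loss. The following is a generic sketch of one instruction-tuning step for a causal LM, not the paper's actual pipeline; the prompt template, the toy example, and the OPT-350M checkpoint are illustrative assumptions.

```python
# Minimal instruction-tuning step for a causal LM (illustrative sketch only;
# the template, checkpoint, and example are assumptions, not OPT-IML's exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

def build_example(instruction, demonstrations, query, target):
    # With demonstrations: prepend k solved examples; without: pass an empty list.
    demo_text = "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in demonstrations)
    prompt = f"{instruction}\n\n{demo_text}Input: {query}\nOutput:"
    prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    target_ids = tokenizer(" " + target, add_special_tokens=False).input_ids
    input_ids = prompt_ids + target_ids + [tokenizer.eos_token_id]
    # Compute the loss only on target tokens by masking the prompt with -100.
    labels = [-100] * len(prompt_ids) + target_ids + [tokenizer.eos_token_id]
    return torch.tensor([input_ids]), torch.tensor([labels])

input_ids, labels = build_example(
    instruction="Classify the sentiment of the review as positive or negative.",
    demonstrations=[("A delightful film.", "positive")],
    query="The plot was a complete mess.",
    target="negative",
)
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()  # an optimizer step would follow in a real training loop
```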
The processing and recognition of geoscience images have wide applications. Most existing research focuses on understanding high-quality geoscience images under the assumption that all images are clear. However, in many real-world cases, geoscience images may contain occlusions introduced during image acquisition. This is essentially the image inpainting problem studied in computer vision and multimedia. To the best of our knowledge, all existing image inpainting algorithms learn to repair the occluded regions for better visualization quality; they work well for natural images but not well enough for geoscience images, because they ignore the downstream geoscience tasks. This paper aims to repair the occluded regions for better geoscience task performance while simultaneously maintaining high visualization quality, without changing the currently deployed deep-learning-based geoscience models. Because of the complex context of geoscience images, we propose a coarse-to-fine encoder-decoder network with coarse-to-fine adversarial context discriminators to reconstruct the occluded image regions. Due to the limited amount of geoscience image data, we use a MaskMix-based data augmentation method to exploit more information from the limited data. Experimental results on three public geoscience datasets, for remote sensing scene recognition, cross-view geolocation, and semantic segmentation respectively, show the effectiveness and accuracy of the proposed method.
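Since the abstract only names the components, the sketch below shows one plausible way a coarse-to-fine inpainting pipeline with adversarial context discriminators could be wired together; the layer sizes, loss weights, and composition steps are assumptions, not the authors' architecture, and the MaskMix augmentation is not shown.

```python
# Structural sketch of a coarse-to-fine inpainting pipeline with adversarial
# discriminators; all sizes and losses are assumptions, not the paper's design.
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))

class EncoderDecoder(nn.Module):
    """Shared shape for both the coarse and the refinement stage."""
    def __init__(self, cin):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(cin, 32), conv_block(32, 64, stride=2),
            conv_block(64, 64),
            nn.Upsample(scale_factor=2, mode="nearest"),
            conv_block(64, 32), nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class ContextDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32, stride=2), conv_block(32, 64, stride=2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

coarse, refine = EncoderDecoder(cin=4), EncoderDecoder(cin=4)
d_coarse, d_refine = ContextDiscriminator(), ContextDiscriminator()

image = torch.rand(2, 3, 64, 64)                 # dummy clean geoscience images
mask = (torch.rand(2, 1, 64, 64) > 0.8).float()  # 1 = occluded pixel
masked = image * (1 - mask)

coarse_out = coarse(torch.cat([masked, mask], dim=1))
coarse_comp = masked + coarse_out * mask         # keep known pixels, fill holes
refine_out = refine(torch.cat([coarse_comp, mask], dim=1))
refine_comp = masked + refine_out * mask

rec_loss = (refine_comp - image).abs().mean() + (coarse_comp - image).abs().mean()
# Generator-side adversarial term (the discriminator update step is omitted here).
adv_loss = -(d_coarse(coarse_comp).mean() + d_refine(refine_comp).mean())
(rec_loss + 0.01 * adv_loss).backward()
```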
Neural Radiance Fields (NeRF) have received wide attention in Sparse-View Computed Tomography (SVCT) reconstruction as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing a loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume that there is no relative motion during CT acquisition, because they require \textit{accurate} projection poses to model the X-rays that produce the SV sinogram. Therefore, these methods suffer from severe performance drops on real SVCT imaging with motion. In this work, we propose a self-calibrating neural field to recover an artifact-free image from a rigid-motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and then jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative NeRF-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
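The core mechanism is to treat the unknown per-view rigid motion as trainable pose parameters and optimize them jointly with the coordinate MLP against the measured sinogram. Below is a minimal 2D parallel-beam sketch of that joint optimization; the toy geometry, network sizes, and quadrature are assumptions, not the authors' implementation.

```python
# Joint optimization of per-view pose offsets and a coordinate MLP against a
# sparse-view sinogram (2D parallel-beam toy geometry; details are assumptions).
import torch
import torch.nn as nn

n_views, n_dets, n_samples = 30, 64, 64
nominal_angles = torch.linspace(0, torch.pi, n_views)
measured_sino = torch.rand(n_views, n_dets)          # stand-in for real data

field = nn.Sequential(                                # MLP: (x, y) -> attenuation
    nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 1)
)
d_angle = nn.Parameter(torch.zeros(n_views))          # trainable rigid-motion offsets
d_shift = nn.Parameter(torch.zeros(n_views))
opt = torch.optim.Adam([*field.parameters(), d_angle, d_shift], lr=1e-3)

det_pos = torch.linspace(-1, 1, n_dets)
ray_t = torch.linspace(-1, 1, n_samples)

def project(view):
    """Line integrals for one view under its (corrected) pose."""
    theta = nominal_angles[view] + d_angle[view]
    u = det_pos + d_shift[view]                       # detector coordinate with shift
    # Sample points along each parallel ray, rotated into world coordinates.
    x = u[:, None] * torch.cos(theta) - ray_t[None, :] * torch.sin(theta)
    y = u[:, None] * torch.sin(theta) + ray_t[None, :] * torch.cos(theta)
    pts = torch.stack([x, y], dim=-1).reshape(-1, 2)
    vals = field(pts).reshape(n_dets, n_samples)
    return vals.sum(dim=1) * (2.0 / n_samples)        # simple quadrature

for step in range(200):
    opt.zero_grad()
    pred = torch.stack([project(v) for v in range(n_views)])
    loss = ((pred - measured_sino) ** 2).mean()       # self-supervised sinogram loss
    loss.backward()
    opt.step()
```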
In the current work, we propose a self-supervised coordinate projection network (SCOPE) to reconstruct an artifact-free CT image from a single SV sinogram by solving the inverse tomographic imaging problem. Compared with recent related work that solves similar problems using implicit neural representation networks (INR), our essential contribution is an effective and simple re-projection strategy that pushes tomographic image reconstruction quality towards that of supervised deep-learning CT reconstruction methods. The proposed strategy is inspired by the simple relationship between linear algebra and inverse problems. To solve the underdetermined system of linear equations, we first introduce an INR to constrain the solution space via an image continuity prior and obtain a coarse solution. Second, we propose to generate a dense-view sinogram that improves the rank of the linear equation system and produces a more stable CT image solution space. Our experimental results demonstrate that the re-projection strategy significantly improves image reconstruction quality (by at least +3 dB in PSNR). In addition, we integrate the recent hash encoding into our SCOPE model, which greatly accelerates model training. Finally, we evaluate SCOPE on parallel- and fan-beam X-ray SVCT reconstruction tasks. The experimental results indicate that the proposed SCOPE model outperforms two INR-based methods and two popular supervised DL methods.
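The re-projection strategy can be read as a two-stage recipe: fit the INR to the sparse-view sinogram to get a coarse image, then re-project that image at many more view angles so the resulting dense-view system is better conditioned and a standard filtered back-projection yields the final image. Below is a hedged sketch of the second stage using scikit-image's radon/iradon; the angle counts and the placeholder `inr_image` are assumptions.

```python
# Re-projection stage sketch: synthesize a dense-view sinogram from the image
# recovered by the INR, then reconstruct with FBP. The INR fitting itself is
# assumed to have produced `inr_image`; sizes and angle counts are assumptions.
import numpy as np
from skimage.transform import radon, iradon

size = 256
sparse_angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # measured views
dense_angles = np.linspace(0.0, 180.0, 720, endpoint=False)   # re-projected views

# Stage 1 (not shown): an INR is trained to match the sparse-view sinogram and
# evaluated on a pixel grid, yielding a coarse reconstruction.
inr_image = np.random.rand(size, size)        # placeholder for the coarse INR output

# Stage 2: re-project the coarse image at dense view angles ...
dense_sinogram = radon(inr_image, theta=dense_angles, circle=True)

# ... and run filtered back-projection on the dense sinogram for the final image.
final_image = iradon(dense_sinogram, theta=dense_angles, filter_name="ramp", circle=True)
print(final_image.shape)
```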
Over the past decade, online education has become increasingly important in providing affordable, high-quality education to students around the world. This has been further amplified during the global pandemic, as more and more students switched to online learning. Most online education tasks, such as course recommendation, exercise recommendation, or automated evaluation, depend on tracking students' knowledge progress. This is known as the \emph{Knowledge Tracing} problem in the literature. Solving this problem requires collecting student assessment data that reflects their knowledge evolution. In this paper, we present a new knowledge tracing dataset, Database Exercises for Knowledge Tracing (DBE-KT22), collected from an online student exercise system in a course taught at the Australian National University in Australia. We discuss the characteristics of the DBE-KT22 dataset and contrast it with existing datasets in the knowledge tracing literature. Our dataset is publicly available through the Australian Data Archive platform.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Predicting the bankruptcy risk of small and medium-sized enterprises (SMEs) is an important step for financial institutions when making loans. However, existing studies in both the finance and AI research fields tend to consider only enterprise intra-risk or contagion risk, ignoring their interaction and combined effect. This study, for the first time, considers both risks and their joint effect on bankruptcy prediction. Specifically, we first propose an enterprise intra-risk encoder based on statistically significant enterprise risk indicators for intra-risk learning. Then, we propose an enterprise contagion risk encoder based on enterprise relation information from an enterprise knowledge graph for contagion risk embedding. In particular, the contagion risk encoder includes both a newly proposed hypergraph neural network and a heterogeneous graph neural network, which can model contagion risk from two different aspects, i.e., common risk factors based on hyperedges and risks that diffuse directly. To evaluate the model, we collect real-world multi-source data on SMEs and build a novel benchmark dataset called SMEsD. We provide open access to the dataset, which is expected to further promote research on financial risk analysis. Experiments on SMEsD against twelve state-of-the-art baselines demonstrate the effectiveness of the proposed model for bankruptcy prediction.
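For the hypergraph branch of the contagion-risk encoder, enterprises that share a common risk factor can be grouped into one hyperedge and aggregated with a hypergraph convolution. The sketch below uses a textbook-style operator with simplified normalization; the dimensions and toy incidence matrix are assumptions, not the paper's exact layer.

```python
# Generic hypergraph convolution: X' ~ Dv^-1 H De^-1 H^T X Theta, where each
# hyperedge groups enterprises sharing a common risk factor. This is a
# textbook-style operator, not necessarily the one proposed in the paper.
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, incidence):
        # x: (num_nodes, in_dim); incidence H: (num_nodes, num_hyperedges), 0/1.
        node_deg = incidence.sum(dim=1).clamp(min=1)      # Dv
        edge_deg = incidence.sum(dim=0).clamp(min=1)      # De
        # Gather node features into hyperedges, then scatter back to nodes.
        msg = (incidence / edge_deg) @ (incidence.t() @ self.theta(x))
        return torch.relu(msg / node_deg.unsqueeze(1))

# Toy usage: 5 enterprises, 2 shared risk factors (hyperedges), 8-d features.
H = torch.tensor([[1., 0.], [1., 1.], [0., 1.], [1., 0.], [0., 1.]])
x = torch.rand(5, 8)
layer = HypergraphConv(8, 16)
print(layer(x, H).shape)   # torch.Size([5, 16])
```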
Stock Movement Prediction (SMP) aims at predicting the price movements of listed companies' stocks, which is a challenging task due to the volatile nature of financial markets. Recent financial studies show that the momentum spillover effect plays an important role in stock fluctuation. However, previous studies typically learn only simple connection information among related companies, which inevitably fails to model the complex relations among listed companies in real financial markets. To address this issue, we first construct a more comprehensive Market Knowledge Graph (MKG), which contains bi-typed entities, including listed companies and their associated executives, and hybrid relations, including both explicit and implicit relations. Afterwards, we propose a novel dual attention network to learn the momentum spillover signals on the constructed MKG for stock prediction. Empirical experiments on our constructed dataset against nine SOTA baselines demonstrate that the proposed dual attention network can improve stock prediction with the constructed MKG.
Bi-typed heterogeneous graphs appear in many real-world scenarios. However, previous heterogeneous graph learning studies usually ignore the complex interactions among the bi-typed entities in such heterogeneous graphs. To address this issue, in this paper we propose a novel Dual Hierarchical Attention Network (DHAN) to learn comprehensive node representations on bi-typed heterogeneous graphs using intra-class and inter-class hierarchical attention networks. Specifically, the intra-class attention aims to learn node representations from neighbors of the same type, while the inter-class attention aggregates node representations from neighbors of different types. The dual attention operations thus enable DHAN to fully exploit not only a node's intra-class neighboring information but also its inter-class neighboring information in bi-typed heterogeneous graphs. Experimental results on various tasks against state-of-the-art methods fully confirm the capability of DHAN in learning comprehensive node representations.
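In code, the two hierarchical levels reduce to two attention-based aggregations per node, one over same-type neighbors (intra-class) and one over other-type neighbors (inter-class), whose outputs are then fused. A minimal sketch with an assumed scoring function and fusion step follows; it illustrates the idea rather than DHAN's exact layer.

```python
# Minimal dual-attention sketch for a bi-typed graph: attend over same-type
# neighbors (intra-class) and over other-type neighbors (inter-class), then fuse.
# Dimensions, scoring function, and fusion are assumptions for illustration.
import torch
import torch.nn as nn

class NeighborAttention(nn.Module):
    """Single-head attention of a target node over a set of neighbor embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, target, neighbors):
        # target: (dim,)  neighbors: (k, dim)
        pair = torch.cat([target.expand(neighbors.size(0), -1), neighbors], dim=1)
        alpha = torch.softmax(self.score(pair).squeeze(-1), dim=0)   # attention weights
        return (alpha.unsqueeze(1) * neighbors).sum(dim=0)

dim = 32
intra_att = NeighborAttention(dim)   # aggregates neighbors of the same type
inter_att = NeighborAttention(dim)   # aggregates neighbors of the other type
fuse = nn.Linear(2 * dim, dim)

company = torch.rand(dim)            # a node of type A (e.g., a listed company)
same_type_nbrs = torch.rand(4, dim)  # its type-A neighbors
other_type_nbrs = torch.rand(3, dim) # its type-B neighbors (e.g., executives)

h_intra = intra_att(company, same_type_nbrs)
h_inter = inter_att(company, other_type_nbrs)
node_repr = torch.tanh(fuse(torch.cat([h_intra, h_inter], dim=0)))
print(node_repr.shape)               # torch.Size([32])
```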
Facial landmark detection is a very fundamental and significant vision task with many important applications. In practice, facial landmark detection can be affected by a large number of natural degradations. One of the most common and important degradations is the shadow caused by light-source blocking. While many advanced shadow removal methods have been proposed to restore image quality in recent years, their effects on facial landmark detection have not been well studied. For example, it remains unclear whether shadow removal can enhance the robustness of facial landmark detection to diverse shadow patterns. In this work, as a first attempt, we construct a novel benchmark to link these two independent but related tasks (i.e., shadow removal and facial landmark detection). In particular, the proposed benchmark covers diverse face shadows with different intensities, sizes, shapes, and locations. Moreover, to mine hard shadow patterns against facial landmark detection, we propose a novel method (i.e., adversarial shadow attack), which allows us to construct a challenging subset of the benchmark for a comprehensive analysis. With the constructed benchmark, we conduct an extensive analysis of three state-of-the-art shadow removal methods and three landmark detectors. The observations from this work motivate us to design a novel detection-aware shadow removal framework, which empowers shadow removal to achieve higher restoration quality and enhances the shadow robustness of deployed facial landmark detectors.
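One way to picture an adversarial shadow attack is as a search over shadow parameters that maximizes a fixed landmark detector's error. The sketch below optimizes only a darkening strength over a fixed region via gradient ascent; the detector stub, shadow parameterization, and loss are illustrative assumptions, not the paper's method.

```python
# Illustrative adversarial shadow search: darken a fixed face region by a
# trainable factor and ascend the landmark detector's loss. The detector stub,
# shadow model, and loss are assumptions, not the paper's method.
import torch
import torch.nn as nn

detector = nn.Sequential(                  # stand-in for a pretrained landmark detector
    nn.Flatten(), nn.Linear(3 * 64 * 64, 68 * 2)
)
for p in detector.parameters():
    p.requires_grad_(False)

face = torch.rand(1, 3, 64, 64)                       # dummy face image
gt_landmarks = detector(face).detach()                # clean prediction as reference
shadow_mask = torch.zeros(1, 1, 64, 64)
shadow_mask[..., 16:48, 16:48] = 1.0                  # fixed region to be shadowed

darkness = torch.tensor(0.3, requires_grad=True)      # trainable shadow strength
opt = torch.optim.Adam([darkness], lr=0.05)

for step in range(50):
    opt.zero_grad()
    factor = 1.0 - torch.sigmoid(darkness) * shadow_mask   # 1 outside, dimmed inside
    shadowed = face * factor
    landmark_err = ((detector(shadowed) - gt_landmarks) ** 2).mean()
    (-landmark_err).backward()                        # gradient ascent on the error
    opt.step()

print("hard shadow strength:", torch.sigmoid(darkness).item())
```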